[clusteragent/autoscaling] Defer autoscaling stack startup until first DPA or autoscaling workload #50305
davidor wants to merge 9 commits into
Conversation
Go Package Import Differences
Baseline: 3227852

🎯 Code Coverage (details) 🔗 Commit SHA: 406db0c | Docs | Datadog PR Page | Give us feedback!

Files inventory check summary
File checks results against ancestor 3227852e: Results for datadog-agent_7.80.0~devel.git.838.406db0c.pipeline.113349055-1_amd64.deb: No change detected

Static quality checks
✅ Please find below the results from static quality gates.
Info: 31 successful checks with minimal change (< 2 KiB)

Regression Detector
Regression Detector Results
Metrics dashboard
Baseline: 3227852
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | docker_containers_cpu | % cpu utilization | -0.52 | [-3.41, +2.36] | 1 | Logs |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | docker_containers_memory | memory utilization | +0.45 | [+0.34, +0.56] | 1 | Logs |
| ➖ | ddot_logs | memory utilization | +0.29 | [+0.22, +0.36] | 1 | Logs |
| ➖ | file_tree | memory utilization | +0.22 | [+0.17, +0.26] | 1 | Logs |
| ➖ | otlp_ingest_logs | memory utilization | +0.14 | [+0.04, +0.23] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.02 | [-0.09, +0.13] | 1 | Logs |
| ➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | +0.00 | [-0.05, +0.06] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_v3 | ingress throughput | -0.00 | [-0.21, +0.20] | 1 | Logs |
| ➖ | ddot_metrics_sum_delta | memory utilization | -0.01 | [-0.20, +0.18] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | -0.02 | [-0.15, +0.12] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | -0.03 | [-0.07, +0.01] | 1 | Logs bounds checks dashboard |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.04 | [-0.49, +0.41] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | -0.05 | [-0.26, +0.15] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | -0.06 | [-0.58, +0.47] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | -0.06 | [-0.46, +0.34] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | -0.07 | [-0.12, -0.02] | 1 | Logs bounds checks dashboard |
| ➖ | ddot_metrics | memory utilization | -0.11 | [-0.31, +0.10] | 1 | Logs |
| ➖ | otlp_ingest_metrics | memory utilization | -0.16 | [-0.32, -0.00] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulativetodelta_exporter | memory utilization | -0.22 | [-0.46, +0.01] | 1 | Logs |
| ➖ | quality_gate_metrics_logs | memory utilization | -0.27 | [-0.52, -0.03] | 1 | Logs bounds checks dashboard |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -0.45 | [-0.66, -0.25] | 1 | Logs |
| ➖ | docker_containers_cpu | % cpu utilization | -0.52 | [-3.41, +2.36] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulative | memory utilization | -0.70 | [-0.85, -0.54] | 1 | Logs |
| ➖ | quality_gate_logs | % cpu utilization | -1.40 | [-2.40, -0.40] | 1 | Logs bounds checks dashboard |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | observed_value | links |
|---|---|---|---|---|---|
| ✅ | docker_containers_cpu | simple_check_run | 10/10 | 703 ≥ 26 | |
| ✅ | docker_containers_memory | memory_usage | 10/10 | 245.20MiB ≤ 370MiB | |
| ✅ | docker_containers_memory | simple_check_run | 10/10 | 715 ≥ 26 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | 0.16GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_0ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | 0.20GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_1000ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | 0.17GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_100ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | 0.18GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_500ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | quality_gate_idle | intake_connections | 10/10 | 3 ≤ 4 | bounds checks dashboard |
| ✅ | quality_gate_idle | memory_usage | 10/10 | 144.81MiB ≤ 147MiB | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | intake_connections | 10/10 | 3 ≤ 4 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | 484.39MiB ≤ 495MiB | bounds checks dashboard |
| ✅ | quality_gate_logs | intake_connections | 10/10 | 4 ≤ 6 | bounds checks dashboard |
| ✅ | quality_gate_logs | memory_usage | 10/10 | 176.08MiB ≤ 195MiB | bounds checks dashboard |
| ✅ | quality_gate_logs | missed_bytes | 10/10 | 0B = 0B | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | cpu_usage | 10/10 | 352.43 ≤ 2000 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | intake_connections | 10/10 | 3 ≤ 6 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | memory_usage | 10/10 | 374.42MiB ≤ 430MiB | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | missed_bytes | 10/10 | 0B = 0B | bounds checks dashboard |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
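For readers unfamiliar with the report, the decision rule above amounts to a few lines of logic. The sketch below is a hypothetical helper, not the Regression Detector's actual code, that applies the three criteria to one experiment's numbers.

```go
package main

import (
	"fmt"
	"math"
)

// isRegression is a hypothetical helper (not the Regression Detector's real
// implementation) applying the three criteria above to a single experiment.
func isRegression(deltaMeanPct, ciLow, ciHigh float64, erratic bool) bool {
	const effectSizeTolerance = 5.00 // |Δ mean %| threshold from the report

	bigEnough := math.Abs(deltaMeanPct) >= effectSizeTolerance
	ciExcludesZero := ciLow > 0 || ciHigh < 0

	return bigEnough && ciExcludesZero && !erratic
}

func main() {
	// quality_gate_logs from the table above: Δ mean % = -1.40, CI [-2.40, -0.40].
	// The CI excludes zero, but the effect size is below 5%, so it is not flagged.
	fmt.Println(isRegression(-1.40, -2.40, -0.40, false)) // false
}
```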
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_metrics_logs, bounds check missed_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check missed_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
Force-pushed 2e6d3d3 to 6592584 (Compare)
Force-pushed 6592584 to 4a2acf7 (Compare)
@AlexanderYastrebov thanks for the review. I addressed your comments.
### What does this PR do?

This PR adds a new workloadmeta catalog specific to the cluster-agent. This is similar to what already exists for other agents: there's a catalog for dogstatsd, otel, etc.

The reason I'm doing this is that the cluster-agent only needs one collector from the catalog: kubeapiserver. And this collector isn't needed in any other sub-agent. I think having a dedicated catalog makes the code easier to reason about, and avoids pulling in dependencies we don't need (not many in this case). This change also helps simplify another PR I have open: #50305

I think there's something else we can do about workloadmeta catalogs. There's a "global" catalog, but I think most places that use it could use the "core" catalog instead. I'll leave this for a future PR to avoid introducing too many changes at once.

### Describe how you validated your changes

CI + deployed locally on a kind cluster. I verified that the kubeapiserver collector still works in the DCA by checking `agent workload-list`. Also verified that the `agent check` command still works in the DCA (`agent check kubernetes_apiserver`).

Co-authored-by: david.ortiz <david.ortiz@datadoghq.com>
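As a rough illustration of the per-agent catalog idea described above, here is a minimal sketch assuming a factory-map layout; the names (`Collector`, `CollectorFactory`, `GetClusterAgentCatalog`) are illustrative and not the actual datadog-agent API.

```go
// Package catalog is a hypothetical sketch of a per-agent workloadmeta catalog.
package catalog

// Collector stands in for the workloadmeta collector interface.
type Collector interface {
	Start() error
}

// CollectorFactory builds a collector when the catalog is instantiated.
type CollectorFactory func() (Collector, error)

// GetClusterAgentCatalog returns only the collectors the cluster-agent needs.
// Other catalogs (core agent, dogstatsd, otel) declare their own sets, so each
// binary only pulls in the collectors it actually starts.
func GetClusterAgentCatalog(newKubeAPIServerCollector CollectorFactory) map[string]CollectorFactory {
	return map[string]CollectorFactory{
		"kubeapiserver": newKubeAPIServerCollector,
	}
}
```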
Force-pushed 4a2acf7 to af2740e (Compare)
Rebased on top of main to pick #50482
```go
if _, err := dynamicInformer.ForResource(workload.PodAutoscalerGVR).Informer().AddEventHandler(handlers); err != nil {
	return fmt.Errorf("cannot add gate handler to DatadogPodAutoscaler informer: %w", err)
}
if _, err := dynamicInformer.ForResource(podAutoscalerClusterProfileGVR).Informer().AddEventHandler(handlers); err != nil {
```
This seems tricky to me. Some profiles are created OOTB (when autoscaling is started), which implies two things:
- If you ever activated Autoscaling once, it will always stay activated.
- People cannot use our OOTB profiles unless they create either a DPA or a Profile, while the goal of the OOTB profiles is to allow people to put labels on their workloads. It would imply that you also need to check for workloads with labels in the lazy start.
Thanks. I realized I was not really aware of how OOTB profiles are supposed to work.
I agree. The current approach doesn't work.
I see two possible alternatives:
- Instead of enabling the autoscaling components when a profile is detected, we should enable them when we detect one of the supported workloads (deployments, statefulsets, argo) or namespaces labeled with `autoscaling.datadoghq.com/profile`. We could use a metadata-only informer for this. It would be a matter of replicating the informers already used for this in the `WorkloadWatcher`. These informers have a cost, but it should be very low compared to an informer for pods in a large cluster.
- Just gate the pod informer part and start the rest of the autoscaling components normally. This means gating what we know should be the most expensive part by far, and starting everything else (including OOTB profiles reconciliation, etc.) normally. Simpler solution, but more things running for users that do not use autoscaling.
I pushed a new commit that goes with option 1: [clusteragent/autoscaling/workload] Trigger gate on profile-labeled resources
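For reference, a minimal sketch of what option 1 could look like: a label-filtered informer factory whose handlers trigger the lazy start. The label key comes from the discussion above; the function names, GVR list, and use of a dynamic (rather than metadata-only) informer factory are assumptions for illustration, not the actual change.

```go
// Package workload sketches option 1: cheap, label-filtered informers that
// trigger the autoscaling gate. Names are hypothetical.
package workload

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/tools/cache"
)

const profileLabelKey = "autoscaling.datadoghq.com/profile"

// watchProfileLabeledResources watches the resource kinds that can opt into
// autoscaling via the profile label and fires startAutoscalingStack the first
// time any of them is seen. startAutoscalingStack is expected to be idempotent
// (e.g. guarded by sync.Once).
func watchProfileLabeledResources(client dynamic.Interface, stopCh <-chan struct{}, startAutoscalingStack func()) {
	// Only list/watch objects carrying the profile label; this keeps the
	// informer caches tiny compared to a full pod reflector.
	factory := dynamicinformer.NewFilteredDynamicSharedInformerFactory(
		client, 5*time.Minute, metav1.NamespaceAll,
		func(options *metav1.ListOptions) { options.LabelSelector = profileLabelKey },
	)

	handlers := cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) { startAutoscalingStack() },
	}

	// Illustrative set of supported kinds; the real list may differ.
	gvrs := []schema.GroupVersionResource{
		{Group: "apps", Version: "v1", Resource: "deployments"},
		{Group: "apps", Version: "v1", Resource: "statefulsets"},
		{Group: "", Version: "v1", Resource: "namespaces"},
	}
	for _, gvr := range gvrs {
		factory.ForResource(gvr).Informer().AddEventHandler(handlers)
	}

	factory.Start(stopCh)
}
```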
gabedos
left a comment
Autoscaling gate for DCA pod collection lgtm
Force-pushed af2740e to 998d394 (Compare)
JSGette
left a comment
OK for @DataDog/agent-build scope
I rebased on top of main to fix conflicts. Also added a new commit to address @vboulineau's comment in #50305 (comment)
What does this PR do?
The goal of this PR is to allow enabling workload autoscaling without extra cost when it's not in use.
Right now, when autoscaling is enabled, the kubeapiserver workloadmeta collector starts a pod reflector among other things. In large clusters, this can use a lot of memory. This happens even if no DPA is deployed.
We want to avoid this memory usage when no DPAs are deployed.
This will let us enable workload autoscaling by default without extra cost when it's not in use. Users who want it will be able to create DPAs directly, without having to enable the option in the Cluster Agent.
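To make the deferral idea concrete, here is a minimal sketch assuming hypothetical names (`lazyStarter`, `startPodReflector`); the actual PR wires this through the kubeapiserver workloadmeta collector and the informer handlers discussed in the review.

```go
package main

import "sync"

// lazyStarter defers an expensive startup function until something signals
// that autoscaling is actually in use (first DPA or profile-labeled workload).
type lazyStarter struct {
	once  sync.Once
	start func()
}

// Trigger is safe to call from multiple informer event handlers concurrently;
// the expensive start runs at most once.
func (l *lazyStarter) Trigger() {
	l.once.Do(l.start)
}

// startPodReflector is a placeholder for the expensive part: starting the pod
// reflector and the rest of the autoscaling stack.
func startPodReflector() {}

func main() {
	podCollection := &lazyStarter{start: startPodReflector}

	// Informer event handlers for DPAs, profile-labeled workloads, etc. would
	// each call Trigger when they see their first relevant object.
	podCollection.Trigger()
}
```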
This PR does not flip the `autoscaling.workload.enabled` default to true. That will be done in a separate PR so it can be reverted independently if needed.
Describe how you validated your changes
Unit tests plus tests on a local kind cluster.
For the kind tests, I used kwok to simulate a large number of pods so the memory impact of the pod reflector would be measurable.
First, I deployed with autoscaling disabled to get a memory baseline. Then I deployed with autoscaling enabled but no DPAs, and checked that memory usage stayed close to the baseline. Finally, I created a DPA and checked that memory usage went up as expected.